Medical named entity recognition model based on deep auto-encoding
Xudong HOU, Fei TENG, Yi ZHANG
Journal of Computer Applications 2022, 42(9): 2686-2692. DOI: 10.11772/j.issn.1001-9081.2021071317
Abstract

As the networks used for Medical Named Entity Recognition (MNER) grow deeper, deep learning-based recognition models suffer from an imbalance between recognition accuracy and computing power requirements. To address this problem, a medical named entity recognition model based on deep auto-encoding, CasSAttMNER (Cascade Self-Attention Medical Named Entity Recognition), was proposed. Firstly, a depth-difference balance strategy between encoding and decoding was adopted in the model, with the distilled Transformer language model RBT6 serving as the encoder to reduce the encoding depth and the computing power required for training and application. Then, a cascaded multi-task dual decoder built on a Bidirectional Long Short-Term Memory (BiLSTM) network and a Conditional Random Field (CRF) was proposed to perform entity mention sequence labeling and entity class determination. Finally, based on the self-attention mechanism, the model design was optimized by effectively representing the implicit decoding information between entity classes and entity mentions. Experimental results show that the F values of CasSAttMNER on two Chinese medical entity datasets reach 0.9439 and 0.9457, which are 3 percentage points and 8 percentage points higher than those of the baseline model, respectively, verifying that the model further improves decoder performance.
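The abstract describes the architecture only at a high level; the sketch below is a minimal, non-authoritative reconstruction of that pipeline in PyTorch, not the authors' implementation. It assumes the encoder is the 6-layer distilled Chinese Transformer published as hfl/rbt6 on the Hugging Face Hub, treats the two decoding tasks as per-token mention-boundary labeling and entity-class prediction, uses a single nn.MultiheadAttention layer as the self-attention bridge between the two decoders, and replaces CRF decoding with plain linear emission layers so the example stays self-contained.

# Minimal sketch of a CasSAttMNER-style encoder/dual-decoder model (assumptions noted above).
import torch
import torch.nn as nn
from transformers import AutoModel

class CasSAttMNERSketch(nn.Module):
    def __init__(self, num_boundary_tags: int, num_entity_classes: int,
                 encoder_name: str = "hfl/rbt6", lstm_hidden: int = 256):
        super().__init__()
        # Shallow, distilled Transformer encoder (assumed to be RBT6).
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Decoder 1: BiLSTM over encoder states for entity mention sequence labeling.
        self.boundary_lstm = nn.LSTM(hidden, lstm_hidden, batch_first=True,
                                     bidirectional=True)
        self.boundary_out = nn.Linear(2 * lstm_hidden, num_boundary_tags)
        # Self-attention bridge: lets the class decoder attend to boundary-decoding features.
        self.bridge = nn.MultiheadAttention(2 * lstm_hidden, num_heads=4,
                                            batch_first=True)
        # Decoder 2: BiLSTM for entity class determination, cascaded on the bridge output.
        self.class_lstm = nn.LSTM(2 * lstm_hidden, lstm_hidden, batch_first=True,
                                  bidirectional=True)
        self.class_out = nn.Linear(2 * lstm_hidden, num_entity_classes)

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        boundary_feats, _ = self.boundary_lstm(states)
        boundary_logits = self.boundary_out(boundary_feats)   # per-token mention-boundary scores
        bridged, _ = self.bridge(boundary_feats, boundary_feats, boundary_feats,
                                 key_padding_mask=~attention_mask.bool())
        class_feats, _ = self.class_lstm(bridged)
        class_logits = self.class_out(class_feats)            # per-token entity-class scores
        return boundary_logits, class_logits

In this sketch the cascade is expressed by feeding the boundary decoder's features through the attention bridge into the class decoder, which mirrors the paper's stated idea of passing implicit decoding information from entity mentions to entity classes; the exact layer sizes, tag scheme, and CRF training objective are assumptions rather than details taken from the paper.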
